A Direct-Sum Theorem for Read-Once Branching Programs
We study a direct-sum question for read-once branching programs. If M(f) denotes the minimum average memory required to compute a function f(x_1, x_2, ..., x_n), how much memory is required to compute f on k independent inputs that arrive in parallel? We show that when the inputs are sampled independently from some domain X and M(f) = Omega(n), computing the value of f on k streams requires average memory at least Omega(k * M(f)/n).
Our results are obtained by defining new ways to measure the information complexity of read-once branching programs. We define two such measures: the transitional and the cumulative information content. We prove that any read-once branching program with transitional information content I can be simulated using average memory O(n(I+1)). On the other hand, if every read-once branching program with cumulative information content I can be simulated with average memory O(I+1), then computing f on k inputs requires average memory at least Omega(k * (M(f)-1)).
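As a quick sanity check of the scaling in the main bound, consider a function with linear average-memory complexity (the constant c below is illustrative, not from the paper):

```latex
% Suppose M(f) >= c n for some constant c > 0 (i.e., M(f) = Omega(n)).
% The direct-sum theorem then gives, for k independent parallel streams,
\[
  \text{average memory} \;\ge\; \Omega\!\left( \frac{k \cdot M(f)}{n} \right)
  \;\ge\; \Omega\!\left( \frac{k \cdot c\,n}{n} \right)
  \;=\; \Omega(k),
\]
% so the memory requirement grows linearly with the number of parallel copies k.
```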
Exponential Separation between Quantum Communication and Logarithm of Approximate Rank
Chattopadhyay, Mande and Sherif (ECCC 2018) recently exhibited a total
Boolean function, the sink function, that has polynomial approximate rank and
polynomial randomized communication complexity. This gives an exponential
separation between randomized communication complexity and logarithm of the
approximate rank, refuting the log-approximate-rank conjecture. We show that
even the quantum communication complexity of the sink function is polynomial,
thus also refuting the quantum log-approximate-rank conjecture.
Our lower bound is based on the fooling distribution method introduced by Rao
and Sinha (ECCC 2015) for the classical case and extended by Anshu, Touchette,
Yao and Yu (STOC 2017) for the quantum case. We also give a new proof of the
classical lower bound using the fooling distribution method.
Comment: The same lower bound has been obtained independently and
simultaneously by Anurag Anshu, Naresh Goud Boddu and Dave Touchette.
Lower Bounds on the Complexity of Mixed-Integer Programs for Stable Set and Knapsack
Standard mixed-integer programming formulations for the stable set problem on
$n$-node graphs require $n$ integer variables. We prove that this is almost
optimal: we give a family of $n$-node graphs for which every polynomial-size
MIP formulation requires a near-linear (in $n$) number of integer variables.
By a polyhedral reduction we obtain an analogous result for $n$-item knapsack
problems. In both cases, this improves the previously known bounds of
Cevallos, Weltge & Zenklusen (SODA 2018).
To this end, we show that there exists a family of $n$-node graphs whose
stable set polytopes satisfy the following: any $\varepsilon$-approximate
extended formulation for these polytopes, for some constant $\varepsilon > 0$,
has size $2^{\Omega(n/\log n)}$. Our proof extends and simplifies the
information-theoretic methods due to G\"o\"os, Jain & Watson (FOCS 2016, SIAM
J. Comput. 2018), who showed the same result for the case of exact extended
formulations (i.e., $\varepsilon = 0$).
Comment: 35 pages.
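For context, the "standard" formulation referred to above is the textbook edge formulation of the stable set problem, which uses exactly one binary variable per node:

```latex
% Stable set on an n-node graph G = (V, E), edge formulation:
\[
  \max \; \sum_{v \in V} x_v
  \qquad \text{s.t.} \qquad
  x_u + x_v \le 1 \;\; \forall \{u, v\} \in E,
  \qquad x \in \{0, 1\}^n ,
\]
% which uses n integer (binary) variables. The lower bound above states that
% no polynomial-size MIP formulation can use substantially fewer integer
% variables.
```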
Majorizing measures for the optimizer
The theory of majorizing measures, extensively developed by Fernique, Talagrand and many others, provides one of the most general frameworks for controlling the behavior of stochastic processes. In particular, it can be applied to derive quantitative bounds on the expected suprema and the degree of continuity of sample paths for many processes. One of the crowning achievements of the theory is Talagrand's tight alternative characterization of the suprema of Gaussian processes in terms of majorizing measures. The proof of this theorem was difficult, and thus considerable effort was put into developing shorter and easier-to-understand proofs. A major reason for this difficulty was considered to be the theory of majorizing measures itself, which had the reputation of being opaque and mysterious. As a consequence, most recent treatments of the theory (including by Talagrand himself) have eschewed majorizing measures in favor of a purely combinatorial approach (the generic chaining), where objects based on sequences of partitions provide roughly matching upper and lower bounds on the desired expected supremum.

In this paper, we return to majorizing measures as a primary object of study, and give a viewpoint that we think is natural and clarifying from an optimization perspective. As our main contribution, we give an algorithmic proof of the majorizing measures theorem based on two parts: We make the simple (but apparently new) observation that finding the best majorizing measure can be cast as a convex program; this also allows the measure to be computed efficiently using off-the-shelf methods from convex optimization. We then obtain tree-based upper and lower bound certificates by rounding, in a series of steps, the primal and dual solutions to this convex program. While duality has conceptually been part of the theory since its beginnings, as far as we are aware no explicit link to convex optimization has previously been made.
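For reference, the majorizing measure functional at the heart of Talagrand's theorem can be written (in one standard normalization) as follows; the constants c, C are universal:

```latex
% Given a metric space (T, d), a probability measure \mu on T, and B(t, r)
% the ball of radius r around t, the majorizing measure functional is
\[
  \gamma(T, d) \;=\; \inf_{\mu} \; \sup_{t \in T} \int_0^{\infty}
    \sqrt{\log \frac{1}{\mu(B(t, r))}} \; dr .
\]
% Talagrand's theorem: for a centered Gaussian process (X_t)_{t \in T} with
% canonical distance d(s, t) = (E (X_s - X_t)^2)^{1/2},
\[
  c \, \gamma(T, d) \;\le\; \mathbb{E} \sup_{t \in T} X_t \;\le\; C \, \gamma(T, d).
\]
% The convex program described in the abstract optimizes over the measure \mu.
```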
Prefix Discrepancy, Smoothed Analysis, and Combinatorial Vector Balancing
A well-known result of Banaszczyk in discrepancy theory concerns the prefix
discrepancy problem (also known as the signed series problem): given a sequence
of $T$ unit vectors in $\mathbb{R}^d$, find $\pm$ signs for each of them such
that the signed sum vector along any prefix has a small $\ell_\infty$-norm?
This problem is central to proving upper bounds for the Steinitz problem, and
the popular Koml\'os problem is a special case where one is only concerned with
the final signed sum vector instead of all prefixes. Banaszczyk gave an
$O(\sqrt{\log d + \log T})$ bound for the prefix discrepancy problem. We
investigate the tightness of Banaszczyk's bound and consider natural
generalizations of prefix discrepancy:
We first consider a smoothed analysis setting, where a small amount of
additive noise perturbs the input vectors. We show an exponential improvement
in the dependence on $T$ compared to Banaszczyk's bound: using a primal-dual
approach and a careful chaining argument, we show that such an improved bound
can be achieved with high probability in the smoothed setting.
Moreover, this smoothed analysis bound is the best possible without further
improvement on Banaszczyk's bound in the worst case.
We also introduce a generalization of the prefix discrepancy problem where
the discrepancy constraints correspond to paths on a DAG on $m$ vertices. We
show that an analog of Banaszczyk's bound continues to hold in this setting
for adversarially given unit vectors, and that the corresponding logarithmic
factor is unavoidable for DAGs. We also show that this dependence cannot be
improved significantly in the smoothed case for DAGs.
We conclude by exploring a more general notion of vector balancing, which we
call combinatorial vector balancing. We obtain near-optimal bounds in this
setting, up to poly-logarithmic factors.
Comment: 22 pages. Appeared in ITCS 2022.
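Since the prefix discrepancy objective above is purely combinatorial, a tiny brute-force reference implementation may help fix ideas. This is a sketch with hypothetical helper names, exponential in the number of vectors, intended only for sanity-checking small instances:

```python
import itertools

def prefix_discrepancy(vectors, signs):
    """Max l-infinity norm of the signed prefix sums for a fixed sign pattern."""
    d = len(vectors[0])
    prefix = [0.0] * d
    worst = 0.0
    for v, s in zip(vectors, signs):
        for i in range(d):
            prefix[i] += s * v[i]
        worst = max(worst, max(abs(x) for x in prefix))
    return worst

def best_prefix_discrepancy(vectors):
    """Exact optimum by brute force over all 2^T sign patterns (tiny T only)."""
    T = len(vectors)
    return min(
        prefix_discrepancy(vectors, signs)
        for signs in itertools.product((+1, -1), repeat=T)
    )

# Repeated standard basis vectors: each step moves one coordinate by +-1,
# and alternating signs keep every prefix coordinate in [-1, 1].
vecs = [(1.0, 0.0), (1.0, 0.0), (0.0, 1.0), (0.0, 1.0)]
print(best_prefix_discrepancy(vecs))
```

The first prefix always has norm 1 for unit vectors, so the printed optimum here is exactly 1.0.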
Online Discrepancy Minimization for Stochastic Arrivals
In the stochastic online vector balancing problem, vectors $v_1, v_2, \ldots,
v_T$ chosen independently from an arbitrary distribution in $\mathbb{R}^n$
arrive one-by-one and must be immediately given a sign $\pm 1$.
The goal is to keep the norm of the discrepancy vector, i.e., the signed
prefix-sum, as small as possible for a given target norm.
We consider some of the most well-known problems in discrepancy theory in the
above online stochastic setting, and give algorithms that match the known
offline bounds up to $\mathrm{polylog}(nT)$ factors. This substantially
generalizes and improves upon the previous results of Bansal, Jiang, Singla,
and Sinha (STOC '20). In particular, for the Koml\'{o}s problem where
$\|v_t\|_2 \le 1$ for each $t$, our algorithm achieves polylogarithmic
discrepancy with high probability, improving upon the previous bound, which
was polynomial in $n$. For Tusn\'{a}dy's problem of minimizing the
discrepancy of axis-aligned boxes, we obtain a polylogarithmic (in $T$) bound
for an arbitrary distribution over points. Previous techniques only worked for
product distributions and gave a weaker bound. We also consider the
Banaszczyk setting, where given a symmetric convex body $K$ with Gaussian
measure at least $1/2$, our algorithm achieves polylogarithmic discrepancy
with respect to the norm given by $K$ for input distributions with
sub-exponential tails.
Our key idea is to introduce a potential that also enforces constraints on
how the discrepancy vector evolves, allowing us to maintain certain
anti-concentration properties. For the Banaszczyk setting, we further enhance
this potential by combining it with ideas from generic chaining. Finally, we
also extend these results to the setting of online multi-color discrepancy.
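The abstract does not specify the paper's potential function, but the flavor of potential-based signing can be illustrated by the simplest classical rule: greedily choose the sign minimizing the squared $\ell_2$ norm of the running discrepancy vector. This toy rule (function name illustrative) only guarantees $\|w_T\|_2^2 \le \sum_t \|v_t\|_2^2$, far weaker than the bounds above, but it shows the mechanism:

```python
import random

def greedy_signs(vectors):
    """Sign each vector to minimize ||w + s*v||^2, where w is the running sum.

    Since ||w + s*v||^2 = ||w||^2 + ||v||^2 + 2*s*<w, v>, the minimizing
    choice is s = -sign(<w, v>), which makes the cross term non-positive and
    yields ||w_T||^2 <= sum_t ||v_t||^2 by induction over the arrivals.
    """
    n = len(vectors[0])
    w = [0.0] * n
    signs = []
    for v in vectors:
        inner = sum(wi * vi for wi, vi in zip(w, v))
        s = -1 if inner > 0 else 1
        w = [wi + s * vi for wi, vi in zip(w, v)]
        signs.append(s)
    return signs, w

# Stochastic arrivals: i.i.d. random vectors, signed online one at a time.
random.seed(0)
vecs = [[random.uniform(-1.0, 1.0) for _ in range(5)] for _ in range(200)]
signs, w = greedy_signs(vecs)
disc_sq = sum(x * x for x in w)
total_sq = sum(sum(x * x for x in v) for v in vecs)
assert disc_sq <= total_sq  # the greedy invariant always holds
```

The point of the stronger potentials in the paper is precisely to enforce anti-concentration properties that this naive squared-norm potential does not provide.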